    Analysis of variance--why it is more important than ever

    Analysis of variance (ANOVA) is an extremely important method in exploratory and confirmatory data analysis. Unfortunately, in complex problems (e.g., split-plot designs), it is not always easy to set up an appropriate ANOVA. We propose a hierarchical analysis that automatically gives the correct ANOVA comparisons even in complex scenarios. The inferences for all means and variances are performed under a model with a separate batch of effects for each row of the ANOVA table. We connect to classical ANOVA by working with finite-sample variance components: fixed and random effects models are characterized by inferences about existing levels of a factor and new levels, respectively. We also introduce a new graphical display showing inferences about the standard deviations of each batch of effects. We illustrate with two examples from our applied data analysis, first illustrating the usefulness of our hierarchical computations and displays, and second showing how the ideas of ANOVA are helpful in understanding a previously fit hierarchical model. Comment: This paper is discussed in [math.ST/0508526], [math.ST/0508527], [math.ST/0508528], [math.ST/0508529]; the rejoinder is in [math.ST/0508530].
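
    A compact way to see the hierarchical formulation is the following sketch (generic notation, not tied to a particular design): each data point is a sum of effects, one drawn from each batch k, and each batch gets its own variance parameter,

        y_i = \sum_{k=1}^{K} \beta^{(k)}_{j^k_i}, \qquad \beta^{(k)}_j \sim \mathrm{N}(0, \sigma_k^2), \quad j = 1, \dots, J_k,

    with the finite-population standard deviation of batch k defined as

        s_k = \sqrt{ \frac{1}{J_k - 1} \sum_{j=1}^{J_k} \left( \beta^{(k)}_j - \bar{\beta}^{(k)} \right)^2 }.

    Inference about s_k (the realized, existing levels) corresponds to the classical fixed-effects view, while inference about the superpopulation parameter \sigma_k (new levels) corresponds to the random-effects view; the proposed graphical display plots the estimated standard deviation for each batch, i.e. for each row of the ANOVA table.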

    BIEMS: A Fortran 90 Program for Calculating Bayes Factors for Inequality and Equality Constrained Models

    This paper discusses a Fortran 90 program referred to as BIEMS (Bayesian inequality and equality constrained model selection) that can be used for calculating Bayes factors of multivariate normal linear models with equality and/or inequality constraints between the model parameters versus a model containing no constraints, which is referred to as the unconstrained model. The prior that is used under the unconstrained model is the conjugate expected-constrained posterior prior, and the prior under a constrained model is proportional to the unconstrained prior truncated in the constrained space. This results in Bayes factors that appropriately balance model fit and complexity for a broad class of constrained models. When the set of equality and/or inequality constraints in the model represents a hypothesis that applied researchers have in, for instance, (M)AN(C)OVA, (multivariate) regression, or repeated measurements, the obtained Bayes factor can be used to determine how much evidence the data provide in favor of the hypothesis in comparison to the unconstrained model. If several hypotheses are under investigation, the Bayes factors between the constrained models can be calculated using the Bayes factors obtained from BIEMS. Furthermore, posterior model probabilities of constrained models are provided, which allows the user to compare the models directly with each other.
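
    For a purely inequality-constrained hypothesis H_m, the resulting Bayes factor against the unconstrained model H_u takes the familiar fit/complexity form (a sketch of the general idea; equality constraints additionally require a ratio of posterior and prior densities at the constrained values):

        BF_{m,u} = \frac{f_m}{c_m} = \frac{\Pr(\theta \in \Theta_m \mid \mathbf{y}, H_u)}{\Pr(\theta \in \Theta_m \mid H_u)},

    where the fit f_m is the posterior probability, under the unconstrained model, that the parameters satisfy the constraints, and the complexity c_m is the corresponding prior probability. Bayes factors between two constrained models follow as BF_{m,m'} = BF_{m,u} / BF_{m',u}, and with equal prior model probabilities the posterior model probability of H_m is BF_{m,u} / \sum_{m'} BF_{m',u}.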

    Bain: A program for Bayesian testing of order constrained hypotheses in structural equation models

    This paper presents a new statistical method and accompanying software for the evaluation of order constrained hypotheses in structural equation models (SEM). The method is based on a large-sample approximation of the Bayes factor using a prior with a data-based correlational structure. An efficient algorithm is implemented in an R package to ensure fast computation. The package, referred to as Bain, is easy to use for applied researchers. Two classical examples from the SEM literature are used to illustrate the methodology and software.
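
    A minimal usage sketch in R (the data set, model, and hypothesis below are hypothetical placeholders; the exact parameter-naming rules for the hypothesis string are documented in the bain vignette):

        library(lavaan)
        library(bain)

        # Hypothetical one-factor model; 'mydata' and the indicator names are placeholders.
        model <- 'A =~ x1 + x2 + x3 + x4'
        fit   <- sem(model, data = mydata, std.lv = TRUE)

        # Order-constrained hypothesis on the loadings, evaluated against the
        # unconstrained alternative; lavaan parameters are referred to by their
        # parameter-table labels such as "A=~x1".
        result <- bain(fit, hypothesis = "A=~x1 > A=~x2 > A=~x3 > A=~x4")
        print(result)  # Bayes factors and posterior model probabilities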

    Moving Beyond Traditional Null Hypothesis Testing: Evaluating Expectations Directly

    This mini-review illustrates that testing the traditional null hypothesis is not always the appropriate strategy. Half in jest, we discuss Aristotle's scientific investigations into the shape of the earth in the context of evaluating the traditional null hypothesis. We conclude that Aristotle was actually interested in evaluating informative hypotheses. In contemporary science the situation is not much different. That is, many researchers have no particular interest in the traditional null hypothesis. More can be learned from data by evaluating specific expectations, or so-called informative hypotheses, than by testing the traditional null hypothesis. These informative hypotheses are introduced alongside an overview of the literature on evaluating informative hypotheses.
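
    As a concrete illustration of the distinction (generic group means, not an example taken from the review itself): the traditional null hypothesis states that nothing is going on, whereas an informative hypothesis encodes the researcher's ordered expectation,

        H_0: \mu_1 = \mu_2 = \mu_3 \qquad \text{versus} \qquad H_i: \mu_1 > \mu_2 > \mu_3 .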

    Bayesian hypothesis testing: Editorial to the Special Issue on Bayesian data analysis

    In the past 20 years, there has been steadily increasing attention to and demand for Bayesian data analysis across multiple scientific disciplines, including psychology. Bayesian methods and the related Markov chain Monte Carlo sampling techniques offer new ways of handling old problems, as well as challenging new ones, that may be difficult or impossible to handle using classical approaches. Yet such opportunities and potential improvements have not been sufficiently explored and investigated. This is one of two special issues in Psychological Methods dedicated to the topic of Bayesian data analysis, with an emphasis on Bayesian hypothesis testing, model comparison, and general guidelines for applications in psychology. In this editorial, we provide an overview of the use of Bayesian methods in psychological research and a brief history of the Bayes factor and the posterior predictive p value. Translational abstracts that summarize the articles in this issue in clear and understandable terms are included in the Appendix.
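
    For reference, the two quantities highlighted in the editorial have the following standard definitions (general textbook forms, not tied to any particular article in the issue). The Bayes factor compares the marginal likelihoods of two hypotheses,

        BF_{01} = \frac{p(\mathbf{y} \mid H_0)}{p(\mathbf{y} \mid H_1)},

    and the posterior predictive p value is the probability, averaged over the posterior, that replicated data are at least as extreme as the observed data on a discrepancy measure T,

        p_B = \int \Pr\left( T(\mathbf{y}^{\mathrm{rep}}, \theta) \ge T(\mathbf{y}, \theta) \mid \theta \right) p(\theta \mid \mathbf{y}) \, d\theta .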

    Illustrating Bayesian Evaluation of Informative Hypotheses for Regression Models

    In the present article we illustrate a Bayesian method for evaluating informative hypotheses for regression models. Our main aim is to make this method accessible to psychological researchers without a mathematical or Bayesian background. The use of informative hypotheses is illustrated using two datasets from psychological research. In addition, we analyze generated datasets with manipulated differences in effect size to investigate how Bayesian hypothesis evaluation performs when the magnitude of an effect changes. After reading this article, the reader will be able to evaluate his or her own informative hypotheses.
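
    A minimal sketch of such an analysis in R (the data frame, predictor names, and hypothesized ordering are hypothetical placeholders):

        library(bain)

        # Standardize outcome and predictors so that an order constraint on the
        # coefficients is a statement about relative importance.
        d   <- as.data.frame(scale(mydata[, c("y", "x1", "x2", "x3")]))
        fit <- lm(y ~ x1 + x2 + x3, data = d)

        # H1: x1 is the strongest (positive) predictor, followed by x2, then x3.
        # H2: none of the predictors matter.
        # Multiple hypotheses are separated by semicolons.
        result <- bain(fit, hypothesis = "x1 > x2 > x3 > 0; x1 = x2 = x3 = 0")
        print(result)  # Bayes factors vs. the unconstrained model and the complement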

    Sample size determination for Bayesian ANOVAs with informative hypotheses

    Researchers can express their expectations with respect to the group means in an ANOVA model through equality and order constrained hypotheses. This paper introduces the R package SSDbain, which can be used to calculate the sample size required to evaluate (informative) hypotheses using the Approximate Adjusted Fractional Bayes Factor (AAFBF) for one-way ANOVA models as implemented in the R package bain. The sample size is determined such that the probability that the Bayes factor is larger than a threshold value is at least η when either of the hypotheses under consideration is true. The Bayesian ANOVA, Bayesian Welch's ANOVA, and Bayesian robust ANOVA are available. Using the R package SSDbain and/or the tables provided in this paper, researchers in the social and behavioral sciences can easily plan the sample size if they intend to use a Bayesian ANOVA.
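
    The underlying logic can be sketched as a simulation (an illustration of the idea only, not the SSDbain interface; the group means, the Bayes factor threshold, and the column used to extract the Bayes factor from the bain output are assumptions):

        library(bain)

        # Probability that BF(H1 vs. unconstrained) exceeds a threshold when
        # H1: mu1 > mu2 > mu3 is true, for a given per-group sample size n.
        power_for_n <- function(n, mu = c(0.5, 0.25, 0), sd = 1,
                                threshold = 3, reps = 1000) {
          hits <- replicate(reps, {
            d <- data.frame(
              g = factor(rep(c("g1", "g2", "g3"), each = n)),
              y = rnorm(3 * n, mean = rep(mu, each = n), sd = sd)
            )
            fit <- lm(y ~ g - 1, data = d)        # group means, no intercept
            res <- bain(fit, "gg1 > gg2 > gg3")   # AAFBF via bain
            res$fit$BF.u[1] > threshold           # column name per current bain output
          })
          mean(hits)
        }

        # Increase n until power_for_n(n) reaches the desired eta, e.g. 0.8.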

    Teacher's Corner: Evaluating informative hypotheses using the Bayes factor in structural equation models

    This Teacher's Corner paper introduces Bayesian evaluation of informative hypotheses for structural equation models, using the free open-source R packages bain, for Bayesian informative hypothesis testing, and lavaan, a widely used SEM package. The introduction provides a brief non-technical explanation of informative hypotheses, the statistical underpinnings of Bayesian hypothesis evaluation, and the bain algorithm. Three tutorial examples demonstrate informative hypothesis evaluation in the context of common types of structural equation models: 1) confirmatory factor analysis, 2) latent variable regression, and 3) multiple group analysis. We discuss hypothesis formulation, the interpretation of Bayes factors and posterior model probabilities, and sensitivity analysis.
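
    A sketch of the second type of example, a latent variable regression (model, data, and parameter labels are hypothetical; see the bain vignette for the exact naming of lavaan parameters in hypothesis strings):

        library(lavaan)
        library(bain)

        model <- '
          A =~ a1 + a2 + a3
          B =~ b1 + b2 + b3
          Y =~ y1 + y2 + y3
          Y ~ A + B
        '
        fit <- sem(model, data = mydata, std.lv = TRUE)

        # Informative hypothesis on the structural coefficients: A predicts Y more
        # strongly than B, and both effects are positive.
        result <- bain(fit, hypothesis = "Y~A > Y~B > 0")
        print(result)

    Rerunning bain with a larger value of its fraction argument, which scales the fraction of information used to specify the prior, is one way to carry out the sensitivity analysis mentioned above.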

    Evaluation of inequality constrained hypotheses using a generalization of the AIC

    In the social and behavioral sciences, it is often of little interest to evaluate the null hypothesis by means of a p-value. Researchers are often more interested in quantifying the evidence in the data (rather than computing p-values) for their own expectations, represented by equality and/or inequality constrained hypotheses (rather than the null hypothesis). This article proposes an Akaike-type information criterion (AIC; Akaike, 1973, 1974) called the generalized order-restricted information criterion approximation (GORICA), which evaluates (in)equality constrained hypotheses under a very broad range of statistical models. The results of five simulation studies provide empirical evidence that the GORICA performs convincingly in selecting the best hypothesis out of a set of (in)equality constrained hypotheses. To illustrate the use of the GORICA, researchers' expectations are investigated in a logistic regression, a multilevel regression, and a structural equation model.
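
    The criterion has the same structure as the AIC, misfit plus penalty, with both parts adapted to order-restricted parameter spaces (a sketch of the form only):

        \mathrm{GORICA}_m = -2 \log \tilde{L}_m + 2\, PT_m, \qquad w_m = \frac{\exp(-\tfrac{1}{2}\,\mathrm{GORICA}_m)}{\sum_{m'} \exp(-\tfrac{1}{2}\,\mathrm{GORICA}_{m'})},

    where \tilde{L}_m is a multivariate normal approximation to the likelihood of the structural parameters (centered at their estimates with the corresponding estimated covariance matrix) maximized under the constraints of hypothesis H_m, and PT_m is a penalty that generalizes the AIC's parameter count to (in)equality-constrained hypotheses. Lower values indicate more support; the weights w_m, analogous to Akaike weights, express the relative support for each hypothesis in the set.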
